Deep learning (DL) techniques have been widely used for medical image classification. Most DL-based classification networks are hierarchically structured and are optimized by minimizing a single loss function measured at the end of the network. However, such a single-loss design may optimize one specific value of interest while failing to leverage informative features from intermediate layers, features that could benefit classification performance and reduce the risk of overfitting. Recently, auxiliary convolutional neural networks (AuxCNNs) have been employed on top of traditional classification networks to facilitate the training of intermediate layers and improve classification performance and robustness. In this study, we propose an adversarial-learning-based AuxCNN to support the training of deep neural networks for medical image classification. Our AuxCNN classification framework adopts two main innovations. First, the proposed AuxCNN architecture includes an image generator and an image discriminator for extracting more informative image features for medical image classification, motivated by the concept of generative adversarial networks (GANs) and their impressive ability to approximate target data distributions. Second, a hybrid loss function is designed to guide model training by combining the different objectives of the classification network and the AuxCNN, so as to reduce overfitting. Comprehensive experimental studies demonstrate the superior classification performance of the proposed model. The effects of network-related factors on classification performance are also investigated.
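To make the hybrid-loss idea concrete, here is a minimal PyTorch-style sketch that combines a classification cross-entropy term with a GAN-style adversarial term from an auxiliary discriminator. The weighting factor `lambda_adv` and the tensor names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a hybrid loss: classification objective + adversarial
# auxiliary objective. lambda_adv and the argument names are assumptions.
import torch
import torch.nn.functional as F

def hybrid_loss(class_logits, labels, disc_real, disc_fake, lambda_adv=0.1):
    """Cross-entropy on the main classifier plus a GAN-style auxiliary term."""
    # Standard classification loss on the main network output.
    cls_loss = F.cross_entropy(class_logits, labels)

    # Discriminator should score real images high and generated images low.
    adv_loss = (F.binary_cross_entropy_with_logits(disc_real, torch.ones_like(disc_real)) +
                F.binary_cross_entropy_with_logits(disc_fake, torch.zeros_like(disc_fake)))

    return cls_loss + lambda_adv * adv_loss
```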
To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was performed using dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r, θ) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes containing a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
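As a concrete illustration of the polar-domain angle-rotation augmentation described above, the NumPy sketch below circularly shifts an (r, θ) frame along the θ axis so the same structures appear at different angles. The array layout and the set of rotation angles are assumptions for illustration, not the study's exact settings.

```python
# Minimal sketch: rotating a vessel in polar (r, theta) coordinates is a
# circular shift along the theta axis. Layout and angles are assumptions.
import numpy as np

def rotate_polar(frame: np.ndarray, shift_deg: float) -> np.ndarray:
    """Circularly shift a polar (r, theta) frame by shift_deg degrees."""
    n_theta = frame.shape[1]                     # columns = A-lines over 360 degrees
    shift = int(round(shift_deg / 360.0 * n_theta))
    return np.roll(frame, shift, axis=1)

def augment(frame: np.ndarray, angles=(90, 180, 270)):
    """Yield the original frame plus rotated copies."""
    yield frame
    for a in angles:
        yield rotate_polar(frame, a)
```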
Context: Machine learning (ML) may enable effective automated test generation. Objective: We characterize emerging research, examining testing practices, researcher goals, applied ML techniques, evaluation, and challenges. Methods: We performed a systematic literature review on a sample of 97 publications. Results: ML generates input for system, GUI, unit, performance, and combinatorial testing, or improves the performance of existing generation methods. ML is also used to generate test verdicts, property-based oracles, and expected output oracles. Supervised learning, often based on neural networks, and reinforcement learning, often based on Q-learning, are prevalent, and some publications also employ unsupervised or semi-supervised learning. (Semi-/un-)supervised approaches are evaluated using both traditional testing metrics and ML-related metrics (e.g., accuracy), while reinforcement learning is often evaluated using testing metrics tied to the reward function. Conclusions: The work to date shows great promise, but open challenges remain regarding training data, retraining, scalability, evaluation complexity, the ML algorithms employed and how they are applied, benchmarking, and replicability. Our findings can serve as a roadmap and inspiration for researchers in this field.
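To illustrate the reinforcement-learning pattern the survey highlights, here is a minimal tabular Q-learning sketch for test generation: an agent selects test actions, receives a reward (e.g., newly covered code), and updates Q-values. The environment interface (`reset`, `step`, `available_actions`) is a hypothetical stand-in for a system under test, not an API from any surveyed tool.

```python
# Minimal tabular Q-learning sketch for test generation; the env interface
# is a hypothetical stand-in for a system under test.
import random
from collections import defaultdict

def q_learning_test_gen(env, episodes=100, alpha=0.5, gamma=0.9, epsilon=0.2):
    q = defaultdict(float)                           # (state, action) -> value
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            actions = env.available_actions(state)
            if random.random() < epsilon:            # explore
                action = random.choice(actions)
            else:                                    # exploit best-known action
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)   # reward = e.g. new coverage
            best_next = max((q[(next_state, a)] for a in env.available_actions(next_state)),
                            default=0.0)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```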
Thin-cap fibroatheroma (TCFA) and plaque rupture have been recognized as the most frequent risk factors for thrombosis and acute coronary syndrome. Intravascular optical coherence tomography (IVOCT) can identify TCFA and assess cap thickness, which provides an opportunity to assess plaque vulnerability. We developed an automated method that can detect lipidous plaque and assess fibrous cap thickness in IVOCT images. This study analyzed a total of 4,360 IVOCT image frames of 77 lesions among 41 patients. To improve segmentation performance, preprocessing included lumen segmentation, pixel shifting, and noise filtering on the raw polar (r, θ) IVOCT images. We used the DeepLab v3+ deep learning model to classify lipidous plaque pixels. After lipid detection, we automatically detected the outer border of the fibrous cap using a special dynamic programming algorithm and assessed the cap thickness. Our method provided excellent discriminability of lipid plaque, with a sensitivity of 85.8% and an A-line Dice coefficient of 0.837. By comparing lipid angle measurements between two analysts following editing of our automated software, we found good agreement by Bland-Altman analysis (difference 6.7+/-17 degrees; mean 196 degrees). Our method accurately detected the fibrous cap from the detected lipid plaque. Automated analysis required a significant modification for only 5.5% of frames. Furthermore, our method showed good agreement of fibrous cap thickness between two analysts with Bland-Altman analysis (4.2+/-14.6 microns; mean 175 microns), indicating little bias between users and good reproducibility of the measurement. We developed a fully automated method for fibrous cap quantification in IVOCT images, resulting in good agreement with determinations by analysts. The method has great potential to enable highly automated, repeatable, and comprehensive evaluations of TCFAs.
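For intuition about dynamic-programming border tracing of this kind, the NumPy sketch below picks one depth per A-line that maximizes a per-pixel boundary score while limiting the depth jump between neighboring A-lines. The score map and smoothness limit are illustrative; this is a generic DP boundary tracker, not the paper's specific algorithm.

```python
# Minimal sketch of DP boundary tracking across A-lines in a polar image.
# The score map and max_jump smoothness constraint are assumptions.
import numpy as np

def dp_boundary(score: np.ndarray, max_jump: int = 2) -> np.ndarray:
    """score: (n_theta, n_r) boundary likelihood; returns one depth per A-line."""
    n_theta, n_r = score.shape
    cost = np.full((n_theta, n_r), -np.inf)
    back = np.zeros((n_theta, n_r), dtype=int)
    cost[0] = score[0]
    for t in range(1, n_theta):
        for r in range(n_r):
            lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
            prev = cost[t - 1, lo:hi]
            k = int(np.argmax(prev))
            cost[t, r] = prev[k] + score[t, r]
            back[t, r] = lo + k
    # Backtrack the best-scoring smooth path.
    path = np.zeros(n_theta, dtype=int)
    path[-1] = int(np.argmax(cost[-1]))
    for t in range(n_theta - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```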
Context: The software testing research field is growing and evolving rapidly. Objective: Based on the keywords assigned to publications, we sought to identify the major research topics and understand how they are connected and how they have evolved. Methods: We applied co-word analysis to map the topology of testing research as a network in which author-assigned keywords are connected by edges indicating co-occurrence in publications. Keywords were clustered based on edge density and connection frequency. We examined the most popular keywords, aggregated clusters into high-level research topics, examined how topics are connected, and examined how the field has changed over time. Results: Testing research can be divided into 16 high-level topics and 18 subtopics. Creating guidance, automated test generation, evolution and maintenance, and test oracles have particularly strong connections to other topics, highlighting their multidisciplinary nature. Emerging keywords relate to web and mobile applications, machine learning, energy consumption, automated program repair, and test generation, while emerging connections have formed between topics such as web applications, test oracles, and machine learning. Random testing and requirements-based testing show a potential decline. Conclusions: Our observations, recommendations, and map data provide a deeper understanding of the field and inspiration for exploring its challenges and connections.
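To make the co-word analysis concrete, the sketch below builds such a network with networkx: author keywords become nodes, edges are weighted by how often two keywords co-occur on the same publication, and communities approximate research topics. The clustering algorithm (greedy modularity) and the example keyword lists are illustrative choices, not necessarily those used in the study.

```python
# Minimal sketch of a co-word network: nodes = keywords, edge weights =
# co-occurrence counts, communities ~ research topics. Clustering choice
# and example data are assumptions.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def build_coword_network(publications):
    """publications: iterable of keyword lists, one list per paper."""
    g = nx.Graph()
    for keywords in publications:
        for a, b in combinations(sorted(set(keywords)), 2):
            w = g.get_edge_data(a, b, {}).get("weight", 0)
            g.add_edge(a, b, weight=w + 1)
    return g

pubs = [["machine learning", "test generation", "web applications"],
        ["test oracles", "machine learning"],
        ["test generation", "test oracles"]]
graph = build_coword_network(pubs)
topics = greedy_modularity_communities(graph, weight="weight")
```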